
Why Millennials Love Prenups

The New Yorker

Long the province of the ultra-wealthy, prenuptial agreements are being embraced by young people--including many who don't have all that much to divvy up. More than forty per cent of millennials and Gen Z-ers claim to have signed a prenup. Andrea Zevallos declared 2016 her "year of dating." She was twenty-seven, working at Universal Studios Hollywood, the theme park, and determined to find love. She calculated it would take three dates a week. By December, she was losing hope. "It was exhausting," she said. Then, while scrolling OkCupid, she noticed a "cute guy" with a "Hamilton" reference in his handle. His name was Alex Switzky, and like her he was a musical-theatre enthusiast and aspiring screenwriter. He was different from the other men she'd met. On their second date, he started planning a third. Zevallos was used to L.A. guys who were "cagey about any sort of calendar." One day, Switzky called her. Accustomed to texts, she assumed that he was about to break up with her. "The most millennial response," she recalled, laughing.


Becoming a Centenarian

The New Yorker

Like The New Yorker, I was born in 1925. Somewhat to my surprise, I decided to keep a journal of my hundredth year. The author, who was born on December 17, 1925, notes that the magazine's first issue came out ten months before he did. Old age is no joke, but it can feel like one. You look everywhere for your glasses, until your wife points out that you're wearing them. I turn a hundred this year. People act as though this is an achievement, and I suppose it is, sort of. Nobody in my family has lived this long, and I've been lucky. I'm still in pretty good health, no wasting diseases or Alzheimer's, and friends and strangers comment on how young I look, which cues me to cite the three ages of man: Youth, Maturity, and You Look Great. On the other hand, I've lost so many useful abilities that my wife, Dodie, and I have taken to calling me Feebleman. Look, up in the sky! No, it's ... Dodie doesn't want me to know how old she is, but she's nearly three decades younger than I am, and I become ...


A Robot Simulation Environment for Virtual Reality Enhanced Underwater Manipulation and Seabed Intervention Tasks

El-Muftu, Sumey, Gur, Berke

arXiv.org Artificial Intelligence

This paper presents the MARUN underwater robotic simulator. The simulator architecture enables seamless integration with the ROS-based mission software and web-based user interface of URSULA, a squid-inspired biomimetic robot designed for dexterous underwater manipulation and seabed intervention tasks. Using Unity as the simulation environment enables the integration of virtual reality (VR) and haptic feedback for a more immersive and realistic operator experience and improved dexterity. The utility of the simulator, and the improved dexterity provided by the VR module, is validated through user experiments. Advancements in underwater robotic manipulation have paved the way for remote teleoperation and intervention in challenging aquatic environments. Several well-publicized recent developments have emphasized the increasing importance of dexterous underwater manipulation and intervention capabilities, in particular for vehicles operating close to the seabed. In line with these developments, novel underwater robots specifically designed for such tasks have emerged in recent years [1]-[3], including project URSULA.


Chain-of-Thought Reasoning In The Wild Is Not Always Faithful

Arcuschin, Iván, Janiak, Jett, Krzyzanowski, Robert, Rajamanoharan, Senthooran, Nanda, Neel, Conmy, Arthur

arXiv.org Artificial Intelligence

Chain-of-Thought (CoT) reasoning has significantly advanced state-of-the-art AI capabilities. However, recent studies have shown that CoT reasoning is not always faithful, i.e. CoT reasoning does not always reflect how models arrive at conclusions. So far, most of these studies have focused on unfaithfulness in unnatural contexts where an explicit bias has been introduced. In contrast, we show that unfaithful CoT can occur on realistic prompts with no artificial bias. Our results reveal non-negligible rates of several forms of unfaithful reasoning in frontier models: Sonnet 3.7 (16.3%), DeepSeek R1 (5.3%) and ChatGPT-4o (7.0%) all answer a notable proportion of question pairs unfaithfully. Specifically, we find that models rationalize their implicit biases in answers to binary questions ("implicit post-hoc rationalization"). For example, when separately presented with the questions "Is X bigger than Y?" and "Is Y bigger than X?", models sometimes produce superficially coherent arguments to justify answering Yes to both questions or No to both questions, despite such responses being logically contradictory. We also investigate restoration errors (Dziri et al., 2023), where models make and then silently correct errors in their reasoning, and unfaithful shortcuts, where models use clearly illogical reasoning to simplify solving problems in Putnam questions (a hard benchmark). Our findings raise challenges for AI safety work that relies on monitoring CoT to detect undesired behavior.
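The reversed-pair test for implicit post-hoc rationalization can be sketched as a simple consistency check: a faithful model should not answer Yes (or No) to both directions of the same comparison. This is a hypothetical sketch, not the paper's code; `ask_model` stands in for any chat-model call.

```python
# Sketch of the reversed-pair consistency check for "implicit post-hoc
# rationalization": answering Yes/Yes or No/No to "Is X bigger than Y?"
# and "Is Y bigger than X?" is logically contradictory.

def is_consistent(answer_xy: str, answer_yx: str) -> bool:
    """A reversed comparison pair is consistent only if the answers differ."""
    return answer_xy != answer_yx

def check_pair(ask_model, x: str, y: str) -> bool:
    # ask_model is a placeholder for any chat-model call returning "Yes"/"No".
    a1 = ask_model(f"Is {x} bigger than {y}? Answer Yes or No.")
    a2 = ask_model(f"Is {y} bigger than {x}? Answer Yes or No.")
    return is_consistent(a1, a2)

if __name__ == "__main__":
    # A toy "model" biased toward answering Yes, for illustration only.
    biased = lambda prompt: "Yes"
    print(check_pair(biased, "the Danube", "the Rhine"))  # False: contradiction
```

Scaled over many question pairs, the fraction of inconsistent pairs gives the kind of unfaithfulness rate the abstract reports per model.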


ViewVR: Visual Feedback Modes to Achieve Quality of VR-based Telemanipulation

Erkhov, A., Bazhenov, A., Satsevich, S., Belov, D., Khabibullin, F., Egorov, S., Gromakov, M., Cabrera, M. Altamirano, Tsetserukou, D.

arXiv.org Artificial Intelligence

The paper focuses on an immersive teleoperation system that enhances the operator's ability to actively perceive the robot's surroundings. A consumer-grade HTC Vive VR system was used to synchronize the operator's hand and head movements with a UR3 robot and a custom-built robotic head with two degrees of freedom (2-DoF). The system's usability, manipulation efficiency, and intuitiveness of control were evaluated against static head-camera positioning across three distinct tasks. Teleoperation plays a pivotal role in robotics by enabling efficient data collection for learning from demonstrations. The quality of the collected data depends heavily on the operator's ability to control the system intuitively and receive adaptive visual feedback.


Probing Language Models on Their Knowledge Source

Tighidet, Zineddine, Mogini, Andrea, Mei, Jiali, Piwowarski, Benjamin, Gallinari, Patrick

arXiv.org Artificial Intelligence

Large Language Models (LLMs) often encounter conflicts between the internal knowledge learned during training (parametric knowledge, PK) and external knowledge provided during inference (contextual knowledge, CK). Understanding how LLMs prioritize one knowledge source over the other remains a challenge. In this paper, we propose a novel probing framework to explore the mechanisms governing the selection between PK and CK in LLMs. Using controlled prompts designed to contradict the model's PK, we demonstrate that specific model activations are indicative of the knowledge source employed. We evaluate this framework on various LLMs of different sizes and demonstrate that mid-layer activations, particularly those related to relations in the input, are crucial in predicting knowledge source selection, paving the way for more reliable models capable of handling knowledge conflicts effectively.
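The probing idea — train a linear classifier on hidden activations to predict which knowledge source was used — can be illustrated with a toy sketch. The "activations" below are synthetic stand-ins, not real model states, and the probe is a plain logistic regression; none of this is the paper's actual setup.

```python
import numpy as np

# Toy sketch of a linear probe: given mid-layer activation vectors labeled
# by which knowledge source the model used (1 = parametric, 0 = contextual),
# fit a logistic-regression probe and measure how well it separates them.

rng = np.random.default_rng(0)
d = 16                                   # activation dimension (toy)
n = 200                                  # examples per class
pk = rng.normal(loc=+1.0, size=(n, d))   # synthetic "PK was used" activations
ck = rng.normal(loc=-1.0, size=(n, d))   # synthetic "CK was used" activations
X = np.vstack([pk, ck])
y = np.array([1] * n + [0] * n)

# Logistic-regression probe trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"probe accuracy: {acc:.2f}")  # near 1.0 on this well-separated toy data
```

High probe accuracy on real activations is what licenses the paper's claim that the knowledge source is "indicative" in specific layers; running the same probe layer by layer is how mid-layer activations stand out.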


TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes

Cabrera, Miguel Altamirano, Tirado, Jonathan, Fedoseev, Aleksey, Sautenkov, Oleg, Poliakov, Vladimir, Kopanev, Pavel, Tsetserukou, Dzmitry

arXiv.org Artificial Intelligence

The shape of deformable objects can change drastically during grasping by robotic grippers, causing an ambiguous perception of their alignment and hence resulting in errors in robot positioning and telemanipulation. Rendering clear tactile patterns is fundamental to increasing users' precision and dexterity through tactile haptic feedback during telemanipulation. Therefore, different methods have to be studied to decode the sensors' data into haptic stimuli. This work presents a telemanipulation system for plastic pipettes that consists of a Force Dimension Omega.7 haptic interface endowed with two electro-stimulation arrays and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on convolutional neural networks (CNN) to detect the tilt of deformable objects. The CNN generates a tactile pattern based on recognized tilt data to render further electro-tactile stimuli provided to the user during the telemanipulation. The study has shown that with the CNN algorithm, tilt recognition by users increased from 23.13% (with the downsized data) to 57.9%, and the success rate during teleoperation increased from 53.12% (using the downsized data) to 92.18% (using the tactile patterns generated by the CNN).


AmbigDocs: Reasoning across Documents on Different Entities under the Same Name

Lee, Yoonsang, Ye, Xi, Choi, Eunsol

arXiv.org Artificial Intelligence

Different entities with the same name can be difficult to distinguish. Handling confusing entity mentions is a crucial skill for language models (LMs). For example, given the question "Where was Michael Jordan educated?" and a set of documents discussing different people named Michael Jordan, can LMs distinguish entity mentions to generate a cohesive answer to the question? To test this ability, we introduce a new benchmark, AmbigDocs. By leveraging Wikipedia's disambiguation pages, we identify sets of documents belonging to different entities that share an ambiguous name. From these documents, we generate questions containing the ambiguous name, together with their corresponding sets of answers. Our analysis reveals that current state-of-the-art models often yield ambiguous answers or incorrectly merge information belonging to different entities. We establish an ontology categorizing four types of incomplete answers and automatic evaluation metrics to identify such categories. We lay the foundation for future work on reasoning across multiple documents with ambiguous entities.
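The automatic-evaluation idea can be sketched as a coverage check: given the gold answer for each same-named entity, test whether a model's response covers all of them, some, or none. This is a hypothetical illustration; the category names and substring matching are simplifications, not the paper's actual ontology or metrics.

```python
# Sketch of a completeness check in the spirit of AmbigDocs: each entity
# sharing the ambiguous name has its own gold answer, and a response is
# judged by how many of those answers it covers.

def categorize(answer: str, gold: dict) -> str:
    """Classify a response by coverage of per-entity gold answers."""
    covered = [e for e, a in gold.items() if a.lower() in answer.lower()]
    if len(covered) == len(gold):
        return "complete"
    if not covered:
        return "no match"
    return "partial"   # e.g. answered for one Michael Jordan, ignored the other

gold = {
    "Michael Jordan (basketball)": "University of North Carolina",
    "Michael Jordan (scientist)": "MIT",
}
response = ("The basketball player studied at the University of North "
            "Carolina; the scientist earned his PhD at MIT.")
print(categorize(response, gold))  # complete
```

A "partial" result corresponds to the failure mode the abstract describes, where models answer for one entity and silently drop the others.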


Open-Ended Wargames with Large Language Models

Hogan, Daniel P., Brennen, Andrea

arXiv.org Artificial Intelligence

Wargames are a powerful tool for understanding and rehearsing real-world decision making. Automated play of wargames using artificial intelligence (AI) enables possibilities beyond those of human-conducted games, such as playing the game many times over to see a range of possible outcomes. There are two categories of wargames: quantitative games, with discrete types of moves, and qualitative games, which revolve around open-ended responses. Historically, automation efforts have focused on quantitative games, but large language models (LLMs) make it possible to automate qualitative wargames. We introduce "Snow Globe," an LLM-powered multi-agent system for playing qualitative wargames. With Snow Globe, every stage of a text-based qualitative wargame from scenario preparation to post-game analysis can be optionally carried out by AI, humans, or a combination thereof. We describe its software architecture conceptually and release an open-source implementation alongside this publication. As case studies, we simulate a tabletop exercise about an AI incident response and a political wargame about a geopolitical crisis. We discuss potential applications of the approach and how it fits into the broader wargaming ecosystem.
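The "optionally carried out by AI, humans, or a combination" design can be sketched by treating every player as an interchangeable callable: a human input loop and an LLM call have the same interface. The names below are illustrative, not Snow Globe's actual API.

```python
from typing import Callable

# Each player maps the scenario-so-far to a free-text move, so human and
# AI participants are interchangeable at every stage of the game.
Player = Callable[[str], str]

def human_player(context: str) -> str:
    return input(f"{context}\nYour move: ")

def scripted_player(context: str) -> str:
    # Stand-in for an LLM call (e.g. a chat-completion request).
    return "Convene an emergency response team."

def play_turn(context: str, players: dict) -> str:
    """Run one turn: each player responds to the growing transcript."""
    for name, player in players.items():
        move = player(context)
        context += f"\n{name}: {move}"
    return context

log = play_turn("Scenario: an AI incident has been reported.",
                {"Blue team": scripted_player, "Red team": scripted_player})
print(log)
```

Swapping `scripted_player` for `human_player` on any role turns the fully automated run back into a conventional human-in-the-loop tabletop exercise, which is the hybrid mode the abstract describes.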


Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games

Wu, Zixuan, Ye, Sean, Natarajan, Manisha, Gombolay, Matthew C.

arXiv.org Artificial Intelligence

Reinforcement-learning (RL)-based motion planning has recently shown the potential to outperform traditional approaches in domains from autonomous navigation to robot manipulation. In this work, we focus on a motion planning task for an evasive target in a partially observable multi-agent adversarial pursuit-evasion game (PEG). These pursuit-evasion problems are relevant to various applications, such as search-and-rescue operations and surveillance robots, where robots must effectively plan their actions to gather intelligence or accomplish mission tasks while avoiding detection or capture themselves. We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data, while a low-level RL algorithm reasons about evasive versus global path-following behavior. Our approach outperforms baselines by 51.2%, leveraging the diffusion model to guide the RL algorithm toward more efficient exploration while improving explainability and predictability.
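The hierarchical split can be sketched as a two-level loop: a high-level planner proposes global waypoints, and a low-level policy chooses between following the path and evading a nearby pursuer. Both levels below are crude stand-ins (straight-line waypoints for the diffusion model, a distance threshold for the RL policy); the names and numbers are illustrative, not from the paper.

```python
import math

def high_level_plan(start, goal, n=5):
    # Placeholder for diffusion-model sampling: straight-line waypoints.
    return [(start[0] + (goal[0] - start[0]) * t / n,
             start[1] + (goal[1] - start[1]) * t / n) for t in range(1, n + 1)]

def low_level_action(pos, waypoint, pursuer, evade_radius=2.0):
    # Placeholder for the RL policy: evade if the pursuer is close,
    # otherwise track the current global waypoint.
    if math.dist(pos, pursuer) < evade_radius:
        dx, dy = pos[0] - pursuer[0], pos[1] - pursuer[1]
        norm = math.hypot(dx, dy) or 1.0
        return (dx / norm, dy / norm)          # move away from the pursuer
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)              # follow the global path

waypoints = high_level_plan((0.0, 0.0), (10.0, 0.0))
action = low_level_action((0.0, 0.0), waypoints[0], pursuer=(1.0, 0.0))
print(action)  # points away from the nearby pursuer: (-1.0, 0.0)
```

The point of the split is that the expensive global planner runs rarely while the cheap reactive policy runs every step; in the paper the threshold logic is replaced by a learned RL policy and the straight line by diffusion-sampled paths.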